Solving Hard Stable Matching Problems Involving Groups of Similar Agents
Many important stable matching problems are known to be NP-hard, even when
strong restrictions are placed on the input. In this paper we seek to identify
structural properties of instances of stable matching problems which will allow
us to design efficient algorithms using elementary techniques. We focus on the
setting in which all agents involved in some matching problem can be
partitioned into k different types, where the type of an agent determines his
or her preferences, and agents have preferences over types (which may be
refined by more detailed preferences within a single type). This situation
would arise in practice if agents form preferences solely based on some small
collection of agents' attributes. We also consider a generalisation in which
each agent may consider some small collection of other agents to be
exceptional, and rank these in a way that is not consistent with their types;
this could happen in practice if agents have prior contact with a small number
of candidates. We show that (for the case without exceptions) several
well-studied NP-hard stable matching problems including Max SMTI (that of
finding the maximum cardinality stable matching in an instance of stable
marriage with ties and incomplete lists) belong to the parameterised complexity
class FPT when parameterised by the number of different types of agents needed
to describe the instance. For Max SMTI this tractability result can be extended
to the setting in which each agent promotes at most one `exceptional' candidate
to the top of his/her list (when preferences within types are not refined), but
the problem remains NP-hard if preference lists can contain two or more
exceptions and the exceptional candidates can be placed anywhere in the
preference lists, even if the number of types is bounded by a constant.
Comment: Results on SMTI appear in the proceedings of WINE 2018; Section 6
contains work in progress.
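As an illustrative sketch of the typed-agent model (not the paper's FPT algorithm), the following hypothetical Python checker tests a marriage matching for blocking pairs when each agent's preference list is induced entirely by its type's ranking over types. Agents of the same type are tied, so a blocking pair requires strict improvement on both sides, as in weak stability for SMTI. All names and structures are illustrative assumptions.

```python
# Illustrative sketch (not the paper's FPT algorithm): a blocking-pair
# check for a stable-marriage instance in which each agent's preference
# list is determined entirely by its type's ranking over types.

def prefers(ranking, candidate_type, current_type):
    """True if an agent whose type ranks partner types as `ranking`
    (best first; absent = unacceptable) strictly prefers candidate_type
    to current_type. current_type is None when the agent is unmatched."""
    if candidate_type not in ranking:
        return False
    if current_type is None or current_type not in ranking:
        return True
    return ranking.index(candidate_type) < ranking.index(current_type)

def is_stable(men, women, man_pref, woman_pref, matching):
    """men/women: dicts agent -> type; man_pref/woman_pref: dicts
    type -> list of acceptable partner types; matching: dict man -> woman."""
    matched_woman = {w: m for m, w in matching.items()}
    for m in men:
        for w in women:
            if matching.get(m) == w:
                continue
            m_cur = women[matching[m]] if m in matching else None
            w_cur = men[matched_woman[w]] if w in matched_woman else None
            if (prefers(man_pref[men[m]], women[w], m_cur)
                    and prefers(woman_pref[women[w]], men[m], w_cur)):
                return False  # (m, w) is a blocking pair
    return True
```

Because preferences depend only on types, the input can be described with k preference lists per side regardless of the number of agents, which is the structure the FPT results exploit.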
Preference Elicitation in Matching Markets Via Interviews: A Study of Offline Benchmarks
The stable marriage problem and its extensions have been
extensively studied, with much of the work in the literature
assuming that agents fully know their own preferences over
alternatives. This assumption however is not always practical
(especially in large markets) and agents usually need
to go through some costly deliberation process in order to
learn their preferences. In this paper we assume that such
deliberations are carried out via interviews, where an interview
involves a man and a woman, each of whom learns
information about the other as a consequence. If everybody
interviews everyone else, then clearly agents can fully learn
their preferences. But interviews are costly, and we may
wish to minimize their use. It is often the case, especially
in practical settings, that due to correlation between agents’
preferences, it is unnecessary for all potential interviews to
be carried out in order to obtain a stable matching. Thus
the problem is to find a good strategy for interviews to be
carried out in order to minimize their use, whilst leading to a
stable matching. One way to evaluate the performance of an
interview strategy is to compare it against a naïve algorithm
that conducts all interviews. We argue however that a more
meaningful comparison would be against an optimal offline
algorithm that has access to agents’ preference orderings under
complete information. We show that, unless P=NP, no
offline algorithm can compute the optimal interview strategy
in polynomial time. If we are additionally aiming for a
particular stable matching (perhaps one with certain desirable
properties), we provide restricted settings under which
efficient optimal offline algorithms exist.
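The naïve benchmark can be made concrete: conduct all possible interviews, which reveals complete preference lists, then compute a stable matching with Gale-Shapley deferred acceptance. The sketch below illustrates only that benchmark (assuming complete strict lists on both sides), not the paper's interview strategies; names are illustrative.

```python
# A minimal sketch of the naive benchmark: interview every man-woman
# pair (revealing full preferences), then run Gale-Shapley deferred
# acceptance. Assumes complete strict preference lists.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs: dict man -> list of women (best first); women_prefs
    likewise. Returns a stable matching as a dict man -> woman."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged_to = {}            # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])   # w trades up; old partner is free again
            engaged_to[w] = m
        else:
            free.append(m)               # w rejects m
    return {m: w for w, m in engaged_to.items()}
```

The point of the paper's offline benchmark is that far fewer interviews than this all-pairs schedule may suffice when preferences are correlated.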
Pareto Optimal Allocation under Uncertain Preferences
The assignment problem is one of the most well-studied settings in social
choice, matching, and discrete allocation. We consider the problem with the
additional feature that agents' preferences involve uncertainty. The setting
with uncertainty leads to a number of interesting questions including the
following ones. How to compute an assignment with the highest probability of
being Pareto optimal? What is the complexity of computing the probability that
a given assignment is Pareto optimal? Does there exist an assignment that is
Pareto optimal with probability one? We consider these problems under two
natural uncertainty models: (1) the lottery model in which each agent has an
independent probability distribution over linear orders and (2) the joint
probability model that involves a joint probability distribution over
preference profiles. For both of the models, we present a number of algorithmic
and complexity results.
Comment: Preliminary draft; new results and new author.
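As a concrete (exponential-time) illustration of one of these questions under the lottery model, the sketch below computes the probability that a fixed assignment is Pareto optimal by enumerating all preference profiles and, for each, brute-forcing Pareto dominance. This is purely illustrative and not an algorithm from the paper.

```python
from itertools import product, permutations

# Brute-force sketch for the lottery model: each agent independently
# draws a linear order; we sum the profile probabilities under which a
# fixed assignment is Pareto optimal.

def pareto_optimal(assignment, profile, items):
    """assignment: tuple, entry i is agent i's item; profile: list of
    linear orders (best first), one per agent."""
    rank = [{item: order.index(item) for item in order} for order in profile]
    n = len(assignment)
    for other in permutations(items, n):
        if all(rank[i][other[i]] <= rank[i][assignment[i]] for i in range(n)) \
                and any(rank[i][other[i]] < rank[i][assignment[i]] for i in range(n)):
            return False  # `other` Pareto-dominates `assignment`
    return True

def prob_pareto_optimal(assignment, lotteries, items):
    """lotteries[i]: list of (probability, linear order) pairs for agent i."""
    total = 0.0
    for combo in product(*lotteries):
        p = 1.0
        profile = []
        for prob, order in combo:
            p *= prob
            profile.append(order)
        if pareto_optimal(assignment, profile, items):
            total += p
    return total
```

The paper's complexity results indicate that such exhaustive enumeration is, in general, unavoidable unless standard complexity assumptions fail.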
Size versus truthfulness in the House Allocation problem
We study the House Allocation problem (also known as the Assignment problem),
i.e., the problem of allocating a set of objects among a set of agents, where
each agent has ordinal preferences (possibly involving ties) over a subset of
the objects. We focus on truthful mechanisms without monetary transfers for
finding large Pareto optimal matchings. It is straightforward to show that no
deterministic truthful mechanism can approximate a maximum cardinality Pareto
optimal matching with ratio better than 2. We thus consider randomised
mechanisms. We give a natural and explicit extension of the classical Random
Serial Dictatorship Mechanism (RSDM) specifically for the House Allocation
problem where preference lists can include ties. We thus obtain a universally
truthful randomised mechanism for finding a Pareto optimal matching and show
that it achieves an approximation ratio of e/(e-1). The same bound
holds even when agents have priorities (weights) and our goal is to find a
maximum weight (as opposed to maximum cardinality) Pareto optimal matching. On
the other hand we give a lower bound of 18/13 on the approximation
ratio of any universally truthful Pareto optimal mechanism in settings with
strict preferences. In the case that the mechanism must additionally be
non-bossy (together with a further technical assumption), we show by utilising a
result of Bade that an improved lower bound of e/(e-1) holds. This
lower bound is tight since RSDM for strict preference lists is non-bossy. We
moreover interpret our problem in terms of the classical secretary problem and
prove that our mechanism provides the best randomised strategy of the
administrator who interviews the applicants.
Comment: To appear in Algorithmica (preliminary version appeared in the
Proceedings of EC 2014).
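For intuition, the classical Random Serial Dictatorship for strict lists can be sketched as below. The paper's RSDM extends this to preference lists with ties, and that extension (which needs more than greedy picking) is not reproduced here; names are illustrative.

```python
import random

# Sketch of classical Random Serial Dictatorship (RSD) for strict
# preference lists: agents pick in a uniformly random order, each
# taking the best still-available object on its list, if any.

def random_serial_dictatorship(prefs, rng=random):
    """prefs: dict agent -> list of acceptable objects, best first.
    Returns a Pareto optimal matching as a dict agent -> object."""
    order = list(prefs)
    rng.shuffle(order)
    taken, matching = set(), {}
    for agent in order:
        for obj in prefs[agent]:
            if obj not in taken:
                taken.add(obj)
                matching[agent] = obj
                break
    return matching
```

Universal truthfulness holds because the picking order is drawn independently of the reported preferences, so no agent can gain by misreporting, whatever order is realised.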
Pareto Optimal Allocation under Compact Uncertain Preferences
The assignment problem is one of the most well-studied settings in multi-agent resource allocation. Aziz, de Haan, and Rastegari (2017) considered this problem with the additional feature that agents’ preferences involve uncertainty. In particular, they considered two uncertainty models neither of which is necessarily compact. In this paper, we focus on three uncertain preferences models whose size is polynomial in the number of agents and items. We consider several interesting computational questions with regard to Pareto optimal assignments. We also present some general characterization and algorithmic results that apply to large classes of uncertainty models.
Pareto optimal matchings in many-to-many markets with ties
We consider Pareto optimal matchings (POMs) in a many-to-many market of applicants
and courses where applicants have preferences, which may include ties, over
individual courses and lexicographic preferences over sets of courses. Since this is the
most general setting examined so far in the literature, our work unifies and generalizes
several known results. Specifically, we characterize POMs and introduce the Generalized
Serial Dictatorship Mechanism with Ties (GSDT) that effectively handles ties
via properties of network flows. We show that GSDT can generate all POMs using
different priority orderings over the applicants, but it satisfies truthfulness only for
certain such orderings. This shortcoming is not specific to our mechanism; we show
that any mechanism generating all POMs in our setting is prone to strategic manipulation.
This is in contrast to the one-to-one case (with or without ties), for which
truthful mechanisms generating all POMs do exist.
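For intuition, the tie-free special case of serial dictatorship in this many-to-many setting can be sketched directly: applicants pick in priority order, each greedily filling its capacity with its most-preferred courses that still have seats (greedy is optimal for the picker under lexicographic preferences over sets). GSDT's network-flow machinery for handling ties is not reproduced here; all names are illustrative.

```python
# Illustrative tie-free special case of serial dictatorship in a
# many-to-many market. Each applicant, in priority order, greedily
# fills its capacity with its most-preferred courses that still have
# free seats.

def serial_dictatorship(priority, prefs, applicant_cap, course_cap):
    """priority: list of applicants, highest first; prefs: dict
    applicant -> list of courses, best first; *_cap: dicts of ints."""
    seats = dict(course_cap)
    allocation = {a: [] for a in priority}
    for a in priority:
        for c in prefs[a]:
            if len(allocation[a]) == applicant_cap[a]:
                break  # applicant's capacity reached
            if seats[c] > 0:
                seats[c] -= 1
                allocation[a].append(c)
    return allocation
```

With ties this greedy step is no longer well defined, which is where GSDT resorts to network flows to choose among tied courses without losing Pareto optimality.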
Linear time algorithm for parsing RNA secondary structure
RNA secondary structure prediction is an important problem in computational
molecular biology. Experiments show that existing polynomial time prediction algorithms
have limited success in predicting correctly the base pairs, i.e. secondary structure, in known
biological RNA structures. One limitation of many current algorithms is that they can predict
only restricted classes of structures, excluding many so-called pseudoknotted secondary
structures. The type of the pseudoknotted structures that occur in biological structures, as
well as the type of structures handled by current algorithms, have been poorly understood,
making it difficult to assess the generality of current algorithms.
In this thesis we present a comprehensive and precise classification of structural
elements and loops in a secondary structure, along with a linear time algorithm for parsing
secondary structures into their structural elements.
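For the pseudoknot-free special case, the core of such a parse can be sketched as a standard linear-time stack scan over dot-bracket notation. The thesis's parser additionally handles pseudoknotted structures and classifies loops, which this illustration does not attempt.

```python
# Minimal linear-time sketch: extract base pairs from a pseudoknot-free
# secondary structure in dot-bracket notation using a stack. Each
# position is pushed or popped at most once, giving O(n) time.

def base_pairs(dot_bracket):
    """Return the list of (i, j) base pairs, 0-indexed, with i < j."""
    stack, pairs = [], []
    for j, ch in enumerate(dot_bracket):
        if ch == '(':
            stack.append(j)
        elif ch == ')':
            if not stack:
                raise ValueError(f"unmatched ')' at position {j}")
            pairs.append((stack.pop(), j))
        elif ch != '.':
            raise ValueError(f"unexpected character {ch!r}")
    if stack:
        raise ValueError(f"unmatched '(' at position {stack[-1]}")
    return pairs
```

Pseudoknotted structures contain crossing pairs, which a single stack cannot match; handling them, and grouping the pairs into classified loops, is what requires the more elaborate machinery developed in the thesis.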
The parsing algorithm, along with the classification scheme for the loops in a pseudoknotted
secondary structure, can be used in analysing existing prediction algorithms to
determine which known biological RNA structures can not be predicted by the algorithms.
This analysis can help us to design new and more powerful prediction algorithms.
Furthermore, we present two applications of our work: (i) a linear time free energy
calculation algorithm, and (ii) a linear time test for the class of structures handled by Akutsu's [2] algorithm.
We present a linear time algorithm for calculating the free energy of a given secondary
structure. This algorithm can be useful especially in heuristic prediction algorithms, as they
commonly use a procedure to calculate the free energy for a given sequence and structure.
We also present a linear time algorithm to test whether the prediction algorithm
introduced by Akutsu [2] can handle a given structure. Our analysis of Akutsu's algorithm
on some sets of biological structures shows that although the class of structures handled
by Akutsu's algorithm is proved to be theoretically more general than that handled by the
algorithm of Dirks and Pierce [7], the two algorithms handle the same class of structures
in the biological sets examined.